# 128K context

**Delta Pavonis Qwen 14B** (Apache-2.0) · prithivMLmods · 547 downloads · 3 likes
An enhanced reasoning model based on the Qwen-2.5 14B architecture, optimized for general reasoning and Q&A scenarios, with support for a 128K context window and up to 8K output tokens.
Tags: Large Language Model, Transformers

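For orientation, a model like this can usually be loaded as a standard causal LM with the Hugging Face transformers library. The sketch below is illustrative only: the repo ID is inferred from the listing name and may not match the actual release, and a 14B model in bf16 needs roughly 28 GB of memory.

```python
# Minimal sketch: loading a long-context Qwen-2.5-based 14B model with transformers.
# The repo ID is an assumption derived from the listing name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Delta-Pavonis-Qwen-14B"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 14B parameters; expect ~28 GB of weights in bf16
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the main argument of the text above."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The listing advertises up to 8K output tokens; stay well under that here.
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
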
**Gemma 3 27b It GGUF** · second-state · 2,024 downloads · 0 likes
Gemma-3-27b-it-GGUF is a quantized build of Google's Gemma-3-27b-it model, suited to image-text-to-text tasks.
Tags: Image-Text-to-Text, Transformers

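Because this entry is a GGUF quantization, it is intended for local runtimes such as llama.cpp. A minimal sketch using llama-cpp-python follows; the file name, quantization level, and context size are assumptions rather than values taken from the repository.

```python
# Minimal sketch: running a quantized GGUF build locally with llama-cpp-python.
# Use whichever GGUF file you actually downloaded from the second-state repository.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-27b-it-Q4_K_M.gguf",  # assumed local file name
    n_ctx=8192,        # raise this if you have the memory for longer contexts
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-paragraph overview of Gemma 3."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```
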
**Condor Opus 14B Exp** (Apache-2.0) · prithivMLmods · 99 downloads · 2 likes
Condor-Opus-14B-Exp is a large language model based on the Qwen 2.5 14B model architecture, focused on enhanced reasoning with support for multilingual and long-context processing.
Tags: Large Language Model, Transformers, Multilingual

**Serpens Opus 14B Exp** (Apache-2.0) · prithivMLmods · 158 downloads · 1 like
Serpens-Opus-14B-Exp is a 14-billion-parameter model based on the Qwen 2.5 14B architecture, designed to strengthen reasoning for general-purpose reasoning and Q&A tasks.
Tags: Large Language Model, Transformers, Multilingual